Two-Bit Bit Flipping Decoding of LDPC Codes
In this paper, we propose a new class of bit flipping algorithms for
low-density parity-check (LDPC) codes over the binary symmetric channel (BSC).
Compared to the regular (parallel or serial) bit flipping algorithms, the
proposed algorithms employ one additional bit at a variable node to represent
its "strength." The introduction of this additional bit increases the
guaranteed error correction capability by a factor of at least 2. An additional
bit can also be employed at a check node to capture information which is
beneficial to decoding. A framework for failure analysis of the proposed
algorithms is described. These algorithms outperform the Gallager A/B algorithm
and the min-sum algorithm at much lower complexity. Concatenation of two-bit
bit flipping algorithms shows the potential to approach the performance of belief
propagation (BP) decoding in the error floor region, also at lower complexity.
Comment: 6 pages. Submitted to IEEE International Symposium on Information Theory 201
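As a rough illustration of the idea described above, here is a minimal Python sketch of a parallel bit-flipping decoder that keeps one extra "strength" bit per variable node. The specific update rule (a node hit by a majority of its checks is first weakened, then flipped on a later hit, and regains strength when no longer hit) is an assumption chosen for illustration, not the authors' exact algorithm.

```python
import numpy as np

def two_bit_flip_decode(H, y, max_iter=50):
    """Illustrative two-bit bit-flipping decoder over the BSC.
    Each variable node keeps a hard value plus one 'strength' bit.
    NOTE: the weaken-then-flip update rule below is an assumption,
    not the paper's exact algorithm."""
    m, n = H.shape
    threshold = H.sum(axis=0) // 2 + 1   # strict majority of a node's checks
    val = y.copy()                       # current hard decisions
    strong = np.ones(n, dtype=bool)      # the extra per-node strength bit
    for _ in range(max_iter):
        syndrome = (H @ val) % 2
        if not syndrome.any():
            break                        # all checks satisfied: codeword found
        unsat = H.T @ syndrome           # unsatisfied checks per variable
        hit = unsat >= threshold
        val[hit & ~strong] ^= 1          # weak nodes that are hit: flip
        strong[hit] = False              # hit nodes end up (or stay) weak
        strong[~hit] = True              # untouched nodes regain strength
    return val
```

On the cycle code of K4 (each variable in two checks, each check of degree three), a single error is weakened in one iteration and flipped in the next, illustrating how the strength bit delays hasty flips.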
Lower Bounds on the Redundancy of Huffman Codes with Known and Unknown Probabilities
In this paper we provide a method to obtain tight lower bounds on the minimum
redundancy achievable by a Huffman code when the probability distribution
underlying an alphabet is only partially known. In particular, we address the
case where the occurrence probabilities are unknown for some of the symbols in
an alphabet. Bounds can be obtained for alphabets of a given size, for
alphabets of up to a given size, and for alphabets of arbitrary size. The
method operates on a Computer Algebra System, yielding closed-form numbers for
all results. Finally, we show the potential of the proposed method to shed some
light on the structure of the minimum redundancy achievable by the Huffman
code.
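For context, the redundancy being bounded is the gap between a Huffman code's expected codeword length and the source entropy. The sketch below computes that quantity for a fully specified distribution; the paper's actual contribution (closed-form lower bounds when some symbol probabilities are unknown, derived via a computer algebra system) is not reproduced here.

```python
import heapq, math

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for the given distribution."""
    # heap items: (subtree probability, tiebreak counter, leaf indices)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)   # merge the two least-likely subtrees
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            lengths[i] += 1               # every leaf below gains one bit
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

def redundancy(probs):
    """Expected Huffman length minus entropy, in bits per symbol."""
    lengths = huffman_lengths(probs)
    avg = sum(p * l for p, l in zip(probs, lengths))
    ent = -sum(p * math.log2(p) for p in probs if p > 0)
    return avg - ent
```

For a dyadic distribution such as (1/2, 1/4, 1/4) the redundancy is exactly zero, the floor that any lower bound must respect.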
An overview of JPEG 2000
JPEG-2000 is an emerging standard for still image compression. This paper provides a brief history of the JPEG-2000 standardization process, an overview of the standard, and some description of the capabilities provided by the standard. Part I of the JPEG-2000 standard specifies the minimum compliant decoder, while Part II describes optional, value-added extensions. Although the standard specifies only the decoder and bitstream syntax, in this paper we describe JPEG-2000 from the point of view of encoding. We take this approach because we believe it lends itself to a more compact description that is more easily understood by most readers.
On Trapping Sets and Guaranteed Error Correction Capability of LDPC Codes and GLDPC Codes
The relation between the girth and the guaranteed error correction capability
of γ-left regular LDPC codes when decoded using the bit flipping (serial
and parallel) algorithms is investigated. A lower bound on the size of variable
node sets which expand by a factor of at least 3γ/4 is found based on
the Moore bound. An upper bound on the guaranteed error correction capability
is established by studying the sizes of smallest possible trapping sets. The
results are extended to generalized LDPC codes. It is shown that generalized
LDPC codes can correct a linear fraction of errors under the parallel bit
flipping algorithm when the underlying Tanner graph is a good expander. It is
also shown that the bound cannot be improved when γ is even by studying
a class of trapping sets. A lower bound on the size of variable node sets which
have the required expansion is established.
Comment: 17 pages. Submitted to IEEE Transactions on Information Theory. Parts of this work have been accepted for presentation at the International Symposium on Information Theory (ISIT'08) and the International Telemetering Conference (ITC'08).
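The Moore bound invoked above lower-bounds the number of vertices in a d-regular graph of girth g, and is straightforward to evaluate:

```python
def moore_bound(d, g):
    """Minimum number of vertices in a d-regular graph with girth g
    (the Moore bound). Counts the vertices reachable in a breadth-first
    tree that cannot close a cycle shorter than g."""
    if g % 2:            # odd girth g = 2r + 1: tree rooted at a vertex
        r = (g - 1) // 2
        return 1 + d * sum((d - 1) ** i for i in range(r))
    else:                # even girth g = 2r: tree rooted at an edge
        r = g // 2
        return 2 * sum((d - 1) ** i for i in range(r))
```

The bound is tight for the classical cage graphs: K4 (d=3, g=3), the Petersen graph (d=3, g=5, 10 vertices), and the Heawood graph (d=3, g=6, 14 vertices).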
Error Correction Capability of Column-Weight-Three LDPC Codes: Part II
The relation between the girth and the error correction capability of
column-weight-three LDPC codes is investigated. Specifically, it is shown that
the Gallager A algorithm can correct g/2 − 1 errors in g/2 iterations on a
Tanner graph of girth g ≥ 10.
Comment: 7 pages, 7 figures, submitted to IEEE Transactions on Information Theory (July 2008).
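For readers unfamiliar with the decoder analyzed here, below is a minimal sketch of Gallager A hard-decision message passing. Conventions for the final decision rule vary across references; the majority-with-channel-tiebreak rule used here is one common choice, not necessarily the exact variant assumed in the paper.

```python
import numpy as np

def gallager_a(H, y, max_iter=20):
    """Sketch of Gallager A hard-decision message passing on the BSC.
    Decision rule (an assumption): each variable takes the majority of
    its incoming check messages, with the channel value breaking ties."""
    m, n = H.shape
    chk_nbrs = [list(np.flatnonzero(H[c])) for c in range(m)]
    var_nbrs = [list(np.flatnonzero(H[:, v])) for v in range(n)]
    vc = {(v, c): int(y[v]) for v in range(n) for c in var_nbrs[v]}
    decision = y.copy()
    for _ in range(max_iter):
        # check-to-variable: parity of all *other* incoming messages
        cv = {}
        for c in range(m):
            total = sum(vc[(v, c)] for v in chk_nbrs[c]) % 2
            for v in chk_nbrs[c]:
                cv[(c, v)] = (total + vc[(v, c)]) % 2
        # tentative decision: majority vote, channel value breaks ties
        for v in range(n):
            ones = sum(cv[(c, v)] for c in var_nbrs[v])
            deg = len(var_nbrs[v])
            decision[v] = 1 if 2 * ones > deg else 0 if 2 * ones < deg else y[v]
        if not ((H @ decision) % 2).any():
            break                        # decision is a codeword
        # variable-to-check: send the complement of the channel value
        # only if *all other* incoming check messages support it
        for v in range(n):
            for c in var_nbrs[v]:
                others = [cv[(c2, v)] for c2 in var_nbrs[v] if c2 != c]
                flip = bool(others) and all(o != y[v] for o in others)
                vc[(v, c)] = 1 - int(y[v]) if flip else int(y[v])
    return decision
```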
Standard and specific compression techniques for DNA microarray images
We review the state of the art in DNA microarray image compression and provide original comparisons between standard and microarray-specific compression techniques that validate and expand previous work. First, we describe the most relevant approaches published in the literature and classify them according to the stage of the typical image compression process where each approach makes its contribution, and then we summarize the compression results reported for these microarray-specific image compression schemes. In a set of experiments conducted for this paper, we obtain new results for several popular image coding techniques that include the most recent coding standards. Prediction-based schemes CALIC and JPEG-LS are the best-performing standard compressors, but are improved upon by the best microarray-specific technique, Battiato's CNN-based scheme.
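To illustrate why prediction-based coders such as JPEG-LS perform well here, the sketch below computes prediction residuals with the median edge detector (MED) predictor from LOCO-I/JPEG-LS; entropy coding of the residuals (not shown) would complete a compressor. The zero-padding of out-of-image neighbours is a simplification for illustration.

```python
import numpy as np

def med_residuals(img):
    """Prediction residuals under the JPEG-LS (LOCO-I) median edge
    detector: predict from the left (a), upper (b), and upper-left (c)
    neighbours, switching between min/max near edges and the planar
    predictor a + b - c elsewhere. Out-of-image neighbours are taken
    as 0 (a simplification)."""
    img = img.astype(np.int64)
    h, w = img.shape
    pred = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            a = img[i, j-1] if j > 0 else 0            # left
            b = img[i-1, j] if i > 0 else 0            # above
            c = img[i-1, j-1] if i > 0 and j > 0 else 0  # upper-left
            if c >= max(a, b):
                pred[i, j] = min(a, b)     # horizontal/vertical edge
            elif c <= min(a, b):
                pred[i, j] = max(a, b)
            else:
                pred[i, j] = a + b - c     # smooth region: planar predictor
    return img - pred
```

On a constant image every pixel except the first is predicted exactly, so the residuals are almost all zero and compress to nearly nothing.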